human robot interaction
Sound Judgment: Properties of Consequential Sounds Affecting Human-Perception of Robots
Allen, Aimee, Drummond, Tom, Kulić, Dana
Positive human-perception of robots is critical to achieving sustained use of robots in shared environments. One key factor affecting human-perception of robots is their sounds, especially the consequential sounds which robots (as machines) must produce as they operate. This paper explores qualitative responses from 182 participants to gain insight into human-perception of robot consequential sounds. Participants viewed videos of different robots performing their typical movements, and responded to an online survey regarding their perceptions of robots and the sounds they produce. Topic analysis was used to identify common properties of robot consequential sounds that participants expressed liking, disliking, wanting, or wanting to avoid. Alongside expected reports of disliking high pitched and loud sounds, many participants preferred informative and audible sounds (over no sound) to provide predictability of the purpose and trajectory of the robot. Rhythmic sounds were preferred over acute or continuous sounds, and many participants wanted more natural sounds (such as wind or cat purrs) in place of machine-like noise. The results presented in this paper support future research on methods to improve consequential sounds produced by robots by highlighting features of sounds that cause negative perceptions, and providing insights into sound profile changes for improvement of human-perception of robots, thus enhancing human robot interaction.
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (0.89)
- Research Report > New Finding (0.68)
Proceedings of the AI-HRI Symposium at AAAI-FSS 2022
Han, Zhao, Senft, Emmanuel, Ahmad, Muneeb I., Bagchi, Shelly, Yazdani, Amir, Wilson, Jason R., Kim, Boyoung, Wen, Ruchen, Hart, Justin W., García, Daniel Hernández, Leonetti, Matteo, Mead, Ross, Mirsky, Reuth, Prabhakar, Ahalya, Zimmerman, Megan L.
The Artificial Intelligence (AI) for Human-Robot Interaction (HRI) Symposium has been a successful venue for discussion and collaboration on AI theory and methods aimed at HRI since 2014. This year, after the 2021 review of the achievements of the AI-HRI community over the last decade, we are focusing on a visionary theme: exploring the future of AI-HRI. Accordingly, we added a Blue Sky Ideas track to foster a forward-thinking discussion on future research at the intersection of AI and HRI. As always, we appreciate all contributions related to any topic on AI/HRI and welcome new researchers who wish to take part in this growing community. With the success of past symposia, AI-HRI impacts a variety of communities and problems, and has pioneered the discussions in recent trends and interests. This year's AI-HRI Fall Symposium aims to bring together researchers and practitioners from around the globe, representing a number of university, government, and industry laboratories. In doing so, we hope to accelerate research in the field, support technology transition and user adoption, and determine future directions for our group and our research.
Autonomous Vehicles – Do We Really Know The Risks? – Human Robot Interaction
Autonomous Vehicles (AV) are the riskiest form of human-robot interaction. On the one hand, they offer unparalleled improvements to the safety and comfort of drivers, passengers and other traffic participants. They also promise to reduce emissions. On the other hand, they demand new considerations for trust and responsibilities in human-robot interaction. The field of tension between autonomy, trust and liability can only be manoeuvred on the basis of objective data.
Alves-Oliveira
Human-Robot Interaction (HRI) is a highly multidisciplinary endeavor. However, it often still appears to be an effort driven primarily by technical aims and concerns. We outline some of the major challenges for fruitful interdisciplinary collaboration in HRI, arguing for an improved integration of psychology and applied social sciences and their genuine research agendas. Based on our own disciplinary backgrounds, we discuss these issues from vantage points mostly originating in applied engineering and psychology, but also from relevant related fields such as sociology, communication sciences, philosophy, arts, and design. We take a project-case as an example to discuss grounded and practical challenges in HRI research, and to propose how a combination of artificial intelligence advances and a better conceptual definition of the role of social sciences in HRI research may prove to be beneficial. Our goal is to strengthen the impact and effectiveness of social scientists working in HRI, and thereby better prepare the field for future challenges.
Trott
Speakers frequently repair their speech, and listeners must therefore integrate information across ill-formed, often fragmentary inputs. Previous dialogue systems for human-robot interaction (HRI) have addressed certain problems in dialogue repair, but there are many problems that remain. In this paper, we discuss these problems from the perspective of Conversation Analysis, and argue that a more holistic account of dialogue repair will actually aid in the design and implementation of machine dialogue systems.
Episode 138: Artificial Intelligence, Sexbots and Patipolitics -- with Isabel Millar
Dr. Isabel Millar is a philosopher and cultural theorist from London. She received her PhD from Kingston University, School of Art in 2021. She holds an MA in Psychosocial Studies from Birkbeck College, University of London and a BA in Philosophy from The University of Sussex. She writes and talks about AI, sex, the body, space, culture, film and the future. Isabel is also a Research Fellow at the Centre for Critical Thought, University of Kent and Research Fellow and faculty at the Global Centre for Advanced Studies, where she teaches with GCAS' newly formed Institute of Psychoanalysis.
Bella Hadid recalls 'enormous pressure' she felt to be seen as a 'sexbot' in early modeling days
Bella Hadid is looking back on her time as a model. The 24-year-old star appeared on the cover of Vogue's September issue with several other young modeling stars and discussed what it was like for her when she kick-started her career. Hadid first found success when she was just 17 and said that she found it difficult to balance a public persona with her own personality.
Good egg? Robot chef is trained to make the 'perfect' omelette
A robot has been trained by a team of engineers to prepare and cook an omelette, from breaking the egg to presenting it on a plate to the diner. Researchers from the University of Cambridge worked with domestic appliance firm Beko to train the machine to create the best omelette for the majority of tastes. The team say cooking is an interesting problem for roboticists as 'humans can never be totally objective when it comes to food' or how it should taste. They used machine learning data from a study of volunteers and their reactions to different omelettes cooked in a variety of ways in order to train the robot. The omelette made by the robotic chef 'tasted great – much better than expected', according to the research team who tested the resulting dish.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.26)
- North America > United States > New York (0.05)
Researchers: Driverless cars can't just be safe. They also need to be nice.
Turns out, we may not actually want driverless cars to drive like us. That's according to researchers at the University of Michigan, who say they've found three core "personality" traits autonomous vehicles need to have to make people feel safer with them - even if they themselves don't have those same traits. Car makers already know some drivers still feel weird about the idea of sharing the road with an AV, much less actually using one. "That's partly because [autonomous vehicles] are designed purely from the technical perspectives, and those don't necessarily comply with our human social norms," says X. Jessie Yang, a professor in U of M's Department of Industrial and Operations Engineering, and the School of Information. "Like consider you're a pedestrian who wants to cross the street," she adds. "When you're interacting with a human driving a car, you'll have eye contact."
- Automobiles & Trucks (0.94)
- Transportation > Passenger (0.62)
- Transportation > Ground > Road (0.62)
- Information Technology > Robotics & Automation (0.62)
Speech-Gesture Mapping and Engagement Evaluation in Human Robot Interaction
Ghosh, Bishal, Dhall, Abhinav, Singla, Ekta
A robot needs contextual awareness, effective speech production and complementing non-verbal gestures for successful communication in society. In this paper, we present our end-to-end system that tries to enhance the effectiveness of non-verbal gestures. To achieve this, we identified gestures prominently used in performances by TED speakers, mapped them to their corresponding speech context, and modulated speech based upon the attention of the listener. The proposed method utilized Convolutional Pose Machine [4] to detect the human gesture. Dominant gestures of TED speakers were used for learning the gesture-to-speech mapping, and their speeches were used to train the model. We also evaluated the engagement of the robot with people by conducting a social survey. The effectiveness of the performance was monitored by the robot, and it self-improvised its speech pattern on the basis of the attention level of the audience, which was calculated using visual feedback from the camera. The effectiveness of interaction, as well as the decisions made during improvisation, was further evaluated based on head-pose detection and an interaction survey.
- Research Report (0.64)
- Questionnaire & Opinion Survey (0.48)